The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of participants, and only 50% of participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
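For readers unfamiliar with the most frequently reported strategies, the toy sketch below illustrates two of them: patch-based training for samples that are too large to be processed at once, and k-fold cross-validation on the training set. The array shapes, patch size, and fold count are illustrative assumptions, not values from the survey.

```python
import numpy as np
from sklearn.model_selection import KFold

def extract_patches(volume, patch_size=(64, 64, 64), stride=(64, 64, 64)):
    """Tile a large 3D volume into fixed-size patches for patch-based training."""
    dz, dy, dx = patch_size
    sz, sy, sx = stride
    patches = []
    for z in range(0, volume.shape[0] - dz + 1, sz):
        for y in range(0, volume.shape[1] - dy + 1, sy):
            for x in range(0, volume.shape[2] - dx + 1, sx):
                patches.append(volume[z:z + dz, y:y + dy, x:x + dx])
    return np.stack(patches)

volumes = [np.random.rand(128, 128, 128) for _ in range(10)]  # stand-in training cases
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kfold.split(np.arange(len(volumes)))):
    train_patches = np.concatenate([extract_patches(volumes[i]) for i in train_idx])
    # ...train one model per fold on train_patches, validate on the held-out cases...
    print(f"fold {fold}: {len(train_idx)} cases -> {train_patches.shape[0]} patches")
```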
The machine learning (ML) and clinical research communities approach real-world data (RWD), including data captured in electronic health records (EHRs), very differently. While clinical researchers use RWD cautiously for clinical investigations, ML teams in healthcare consume public datasets and develop new algorithms with minimal scrutiny. This study bridges this gap by developing and validating ML-DQA, a data quality assurance framework grounded in RWD best practices. The ML-DQA framework was applied to five ML projects across two geographies, each addressing a different medical condition and a different population. Across these five projects, RWD on a total of 247,536 patients was collected, and 2,999 quality checks and 24 quality reports were produced. Five generalizable practices emerged: all projects used a similar method to group redundant data element representations; all projects used automated utilities to build diagnosis and medication data elements; all projects used a common rule-based transformation library; all projects used a unified approach to assign data quality checks to data elements; and all projects used a similar approach to clinical adjudication. On average, 5.8 individuals, including clinicians, data scientists, and trainees, were involved in implementing ML-DQA for each project, and an average of 23.4 data elements were processed per project. This study demonstrates the important role ML-DQA plays in healthcare projects and provides teams with a framework for conducting these essential activities.
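The paper's tooling is not reproduced here, but a minimal sketch of the general pattern it describes, assigning rule-based data quality checks to individual data elements and summarising the results per record, might look as follows; the element names and rules are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityCheck:
    name: str
    applies_to: str                      # data element the check is assigned to
    rule: Callable[[object], bool]       # rule-based transformation/validation

# Hypothetical checks; a real project would draw these from a shared library.
checks = [
    QualityCheck("non_negative_age", "age", lambda v: v is not None and 0 <= v <= 120),
    QualityCheck("known_sex_code", "sex", lambda v: v in {"F", "M", "U"}),
    QualityCheck("ldl_plausible", "ldl_mg_dl", lambda v: v is None or 10 <= v <= 500),
]

def run_checks(record: dict) -> dict:
    """Return pass/fail per check for one patient record, feeding a quality report."""
    return {c.name: c.rule(record.get(c.applies_to)) for c in checks}

print(run_checks({"age": 67, "sex": "F", "ldl_mg_dl": 131}))
```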
Objective: To create and evaluate the accuracy of an artificial intelligence deep learning platform (Oraicle) capable of predicting an individual's overall 5-year cardiovascular disease (CVD) risk using only retinal images, as well as the relative contributions of the component risk factors that make up this risk. Methods: We used 165,907 retinal images from a database of 47,236 patient visits. Initially, each image was paired with biometric data (age, ethnicity, sex, disease presence and duration, and HDL/LDL ratios) and with any CVD event within 5 years of retinal image acquisition. Risk scores based on the Framingham equations were calculated. Actual CVD event rates for individuals and for the overall population were also determined. Finally, Oraicle was trained using only age, ethnicity, and sex plus the retinal images. Results: Compared with Framingham-based scores, Oraicle was up to 12% more accurate in predicting cardiovascular events over the following 5 years, particularly for the highest-risk groups. Oraicle outperformed each of the restricted models in reliability and accuracy, indicating that it drew on data from both sets to derive its final result. Conclusion: Retinal photography is inexpensive, and fully automated, low-cost camera systems that require only minimal training to operate are now widely available. AI-based CVD risk algorithms such as Oraicle therefore promise to make cardiovascular health screening more accurate, more consistent, and more accessible. Moreover, Oraicle's unique ability to assess the relative contributions of the components that make up an individual's overall risk will inform treatment decisions tailored to that individual's specific needs, increasing the likelihood of positive health outcomes.
Unsupervised pre-training methods for large vision models have been shown to improve performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities, as unlabeled data are plentiful and the inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoders (MAE). To leverage temporal information, we include a temporal embedding along with independent masking of image patches across time. In addition, we demonstrate that encoding multi-spectral data as groups of bands with distinct spectral positional encodings is beneficial. Our approach yields strong improvements over previous state-of-the-art techniques, both in supervised learning performance on benchmark datasets (up to $\uparrow$7\%) and in transfer learning performance on downstream remote sensing tasks, including land cover classification (up to $\uparrow$14\%) and semantic segmentation.
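As a rough illustration of the masking strategy described above (independent masking of image patches at each timestep, plus a per-timestep embedding), here is a minimal NumPy sketch; the token shapes, mask ratio, and the additive scalar standing in for a learned temporal embedding are assumptions, not SatMAE's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, D = 3, 196, 768            # timesteps, patches per image, patch embedding dim
tokens = rng.normal(size=(T, N, D))

mask_ratio = 0.75
keep = int(N * (1 - mask_ratio))
visible = []
for t in range(T):
    idx = rng.permutation(N)[:keep]        # independent mask per timestep
    tok = tokens[t, idx]
    tok = tok + np.full((keep, 1), t)      # crude stand-in for a temporal embedding
    visible.append(tok)

encoder_input = np.concatenate(visible, axis=0)   # (T * keep, D) visible tokens
print(encoder_input.shape)
```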
Object detection in high-resolution satellite imagery is emerging as a scalable alternative to ground survey data collection in many environmental and socioeconomic monitoring applications. However, performing object detection over large geographies can still be prohibitively expensive due to the high cost of purchasing imagery and of compute. Inspired by traditional survey data collection strategies, we propose a method to estimate object count statistics over large geographies through sampling. Given a cost budget, our method selects a small number of representative areas by sampling from a learned proposal distribution. Using importance sampling, we are able to accurately estimate object counts after processing only a small fraction of the imagery compared to an exhaustive approach. We empirically show that the proposed framework achieves strong performance in estimating the number of buildings in the United States and in Africa, cars in Kenya, brick kilns in Bangladesh, and swimming pools in the United States, while requiring as few as 0.01% of the satellite images needed by an exhaustive approach.
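The core estimator is simple: if regions are sampled from a proposal distribution q, the total count can be estimated as the average of per-region counts divided by their sampling probabilities. The toy sketch below uses synthetic counts and a hand-built proposal in place of the paper's learned proposal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
num_regions = 10_000
true_counts = rng.poisson(lam=rng.gamma(2.0, 2.0, size=num_regions))  # objects per region

# Cheap proxy signal (e.g. coarse imagery) used to build the proposal distribution.
proxy = np.clip(true_counts + rng.normal(0.0, 2.0, size=num_regions), 0.1, None)
q = proxy / proxy.sum()

n_samples = 50                                    # tiny fraction of all regions
idx = rng.choice(num_regions, size=n_samples, p=q)
estimate = np.mean(true_counts[idx] / q[idx])     # importance-weighted count estimate
print(f"true total: {true_counts.sum()}  estimate: {estimate:,.0f}")
```

Because E[c_i / q_i] under the proposal equals the sum of all region counts, the estimate is unbiased, and a proposal concentrated on object-dense regions keeps its variance low.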
Progress toward the United Nations Sustainable Development Goals (SDGs) has been hindered by a lack of data on key environmental and socioeconomic indicators, which historically have come from ground surveys with sparse temporal and spatial coverage. Recent advances in machine learning have made it possible to utilize abundant, frequently updated, and globally available data, such as satellite or social media data, to provide insights into progress toward the SDGs. Despite promising early results, approaches to using such data for SDG measurement have so far largely been evaluated on different datasets or with inconsistent evaluation metrics, making it difficult to understand whether performance is improving and where additional research would be most fruitful. Furthermore, processing satellite and ground survey data requires domain knowledge that many in the machine learning community lack. In this paper, we introduce SustainBench, a collection of 15 benchmark tasks across 7 SDGs, including tasks related to economic development, agriculture, health, education, water and sanitation, climate action, and life on land. Datasets for 11 of the 15 tasks are released publicly for the first time. Our goals for SustainBench are to (1) lower the barriers to entry for the machine learning community to contribute to measuring and achieving the SDGs; (2) provide standard benchmarks for evaluating machine learning models on tasks across a variety of SDGs; and (3) encourage the development of novel machine learning methods where improved model performance facilitates progress toward the SDGs.
Visual saliency prediction using transformers: convolutional neural networks (CNNs) have significantly advanced computational modeling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is critical to integrate properties of human vision into the design of CNN architectures, leading to perceptually more relevant saliency prediction. Due to their inherent inductive biases, CNN architectures lack sufficient capability to encode long-range contextual information, which hinders CNN-based saliency models from capturing properties that emulate human viewing behavior. By leveraging the self-attention mechanism, transformers have shown great potential for encoding long-range information. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that the transformers provide added value to saliency prediction, enhancing its perceptual relevance. Our proposed transformer-based saliency model achieves superior results on public benchmarks and competitions for saliency prediction models. The source code of our proposed saliency model, TranSalNet, is available at: https://github.com/ljovo/transalnet
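A minimal PyTorch sketch of the general idea (local CNN features refined by transformer self-attention for long-range context, then decoded into a saliency map) is given below; this is a toy architecture for illustration, not the actual TranSalNet implementation from the linked repository.

```python
import torch
import torch.nn as nn

class ToySaliencyNet(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                        # local feature extractor
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)  # long-range context
        self.head = nn.Conv2d(dim, 1, 1)                 # per-pixel saliency logits

    def forward(self, x):
        f = self.cnn(x)                                  # (B, C, H, W)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)            # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.head(f))               # coarse saliency map

saliency = ToySaliencyNet()(torch.randn(1, 3, 64, 64))
print(saliency.shape)   # torch.Size([1, 1, 16, 16])
```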
High-resolution satellite imagery has proven useful for a broad range of tasks, including measuring global human population, local economic livelihoods, and biodiversity, among many others. Unfortunately, high-resolution imagery is both infrequently collected and expensive to purchase, making it difficult to efficiently and effectively scale these downstream tasks over both time and space. We propose a new conditional pixel synthesis model that uses abundant, low-cost, low-resolution imagery to generate accurate high-resolution imagery at the desired locations and times. We show that our model attains photo-realistic sample quality and outperforms competitive baselines on a key downstream task, object counting, particularly in geographic locations where conditions on the ground are changing rapidly.
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, while it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if: (1) it violates correct specifications or (2) it maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated but instead relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated for real-world programs in Defects4J. Experiment results show that INVALIDATOR correctly classified 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
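A hedged sketch of the two-stage decision rule described above is shown below; the invariant sets are simple string placeholders and the fallback classifier is a stub, whereas the real INVALIDATOR infers likely invariants automatically and uses a model trained on labeled patches for the syntax-based stage.

```python
def is_overfitting(patch_invariants: set,
                   correct_specs: set,
                   buggy_error_behaviors: set,
                   syntax_classifier=None) -> bool:
    # (1) the patch violates invariants that hold in the developer-patched (correct) program
    violates_correct_spec = not correct_specs.issubset(patch_invariants)
    # (2) the patch still maintains erroneous behaviors of the original buggy program
    keeps_error_behavior = bool(patch_invariants & buggy_error_behaviors)
    if violates_correct_spec or keeps_error_behavior:
        return True
    # fall back to syntax-based classification when invariant reasoning is inconclusive
    if syntax_classifier is not None:
        return syntax_classifier(patch_invariants)
    return False

# Toy usage: the patched program loses the "ret != null" spec, so it is flagged as overfitting.
print(is_overfitting({"x >= 0"}, {"x >= 0", "ret != null"}, {"y < 0"}))  # True
```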
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.